128 research outputs found

    Hierarchical processing, editing and rendering of acquired geometry

    Digital representations of real-world surfaces can now be obtained automatically using various acquisition devices such as 3D scanners and stereo camera systems. These new fast and accurate data sources increase 3D surface resolution by several orders of magnitude, bringing a higher level of precision to applications that require digital surface models. All major computer graphics applications can benefit from this automatic modeling process, including computer-aided design, physical simulation, virtual reality, medical imaging, architecture, archaeological study, special effects, computer animation and video games. Unfortunately, the richness of the geometry produced by these devices comes at the price of a large, possibly gigantic, amount of data, which requires new data structures and algorithms that scale to objects reaching up to a billion samples. This thesis proposes time- and space-efficient solutions for the modeling, geometry processing, interactive editing and rendering of such complex surfaces, solving these problems with new algorithms that share four fundamental elements: a systematic hierarchical approach, a local reduction of the problem's dimension, a sampling-reconstruction paradigm and an independence from any explicit enumeration of topological relations, also called a point-based approach.
    In practice, this manuscript proposes several contributions, including: a new hierarchical space-subdivision structure, the Volume-Surface Tree (VS-Tree), together with new simplification and reconstruction algorithms; a streaming system featuring new algorithms for interactive editing of large objects; an appearance-preserving multiresolution structure for efficient rendering of large point-based surfaces; and a generic kernel for real-time geometry synthesis by refinement. These elements form a pipeline able to process acquired geometry, whether represented as point clouds or as meshes, possibly non-manifold. Effective results have been obtained with data coming from the various application domains mentioned above.
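To make the hybrid partitioning idea more concrete, here is a minimal C++ sketch of what a Volume-Surface Tree node could look like: coarse levels subdivide space volumetrically (octree), and a node switches to a surface-oriented subdivision (quadtree) once its samples are nearly planar, reducing the local dimension from 3D to 2D. The names, the planarity test and the thresholds are illustrative assumptions, not the thesis' actual implementation.

```cpp
#include <algorithm>
#include <memory>
#include <vector>

// Hypothetical sample type, as produced by a 3D scanner.
struct Sample { float p[3]; float n[3]; };

// Sketch of a hybrid Volume-Surface Tree node: 8 children while subdividing
// volumetrically, 4 children once the local point set is close to a plane.
struct VSNode {
    std::vector<Sample> samples;                 // samples stored in this node
    std::vector<std::unique_ptr<VSNode>> kids;   // 8 (volume) or 4 (surface)
    bool surfaceMode = false;

    // Cheap planarity proxy: thinnest bounding-box extent vs. largest extent.
    bool nearlyPlanar(float ratio) const {
        float lo[3] = { 1e30f, 1e30f, 1e30f }, hi[3] = { -1e30f, -1e30f, -1e30f };
        for (const Sample& s : samples)
            for (int a = 0; a < 3; ++a) {
                lo[a] = std::min(lo[a], s.p[a]);
                hi[a] = std::max(hi[a], s.p[a]);
            }
        float e[3] = { hi[0] - lo[0], hi[1] - lo[1], hi[2] - lo[2] };
        std::sort(e, e + 3);
        return e[2] > 0.0f && (e[0] / e[2]) < ratio;
    }

    void split(std::size_t maxSamples, float ratio) {
        if (samples.size() <= maxSamples) return;   // small enough: stay a leaf
        surfaceMode = nearlyPlanar(ratio);
        kids.resize(surfaceMode ? 4 : 8);
        // Distributing samples among children and recursing is omitted here.
    }
};
```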

    Interactive Out-Of-Core Texturing

    Interactive rendering of huge objects has become possible on common workstations thanks to highly optimized data structures and out-of-core rendering frameworks. However, interactive editing, and in particular interactive texturing of such objects, is still a challenging task, since the dynamic information added during this editing step would break any highly optimized data structure, such as GPU vertex buffers or specific out-of-core representations of huge objects. We propose Point-Sampled Textures (PST) for interactive texturing of large models at various scales, without requiring a 2D parameterization (complex and expensive to compute for large models). This framework allows the user to interactively set any appearance property of the original object, from per-sample color to complex BRDFs.
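The core idea of a point-sampled texture is that appearance attributes live on 3D point samples rather than in a parameterized 2D image, and are read back by blending nearby samples. The following C++ sketch illustrates that lookup under assumed names (a brute-force gather; a spatial index would replace the linear scan); it is not the paper's implementation.

```cpp
#include <cmath>
#include <vector>

// One appearance sample attached to a 3D position (any attribute would do).
struct TexSample { float x, y, z; float r, g, b; };
struct Color { float r = 0, g = 0, b = 0; };

// Gaussian-weighted gather of all samples within `radius` of point (px,py,pz).
Color lookup(const std::vector<TexSample>& tex,
             float px, float py, float pz, float radius) {
    Color c; float wsum = 0.0f;
    for (const TexSample& s : tex) {
        float dx = s.x - px, dy = s.y - py, dz = s.z - pz;
        float d2 = dx * dx + dy * dy + dz * dz;
        if (d2 > radius * radius) continue;            // outside kernel support
        float w = std::exp(-d2 / (radius * radius));   // radial weight
        c.r += w * s.r; c.g += w * s.g; c.b += w * s.b;
        wsum += w;
    }
    if (wsum > 0.0f) { c.r /= wsum; c.g /= wsum; c.b /= wsum; }
    return c;
}
```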

    A Flexible Kernel for Adaptive Mesh Refinement on GPU

    We present a flexible GPU kernel for adaptive on-the-fly refinement of meshes with arbitrary topology. By simply reserving a small amount of GPU memory to store a set of adaptive refinement patterns, on-the-fly refinement is performed by the GPU, without any preprocessing or additional topology data structure. The level of adaptive refinement can be controlled by specifying a per-vertex depth tag, in addition to the usual position, normal, color and texture coordinates. This depth tag is used by the kernel to instantiate the correct refinement pattern. Finally, the refined patch produced for each triangle can be displaced by the vertex shader, using any kind of geometric refinement, such as Bézier patch smoothing, scalar-valued displacement, procedural geometry synthesis or subdivision surfaces. This refinement engine requires neither multi-pass rendering, nor fragment processing, nor any special preprocessing of the input mesh structure. It can be implemented on any GPU with vertex shading capabilities.
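The following C++ sketch shows, under assumed names, the pattern-table side of this idea: for each refinement depth, a fixed set of barycentric coordinates describing a uniformly tessellated triangle is precomputed once (and would be stored in GPU memory), and the per-vertex depth tags of a coarse triangle select which pattern to instance. The tag-combination rule shown here is only one possible choice.

```cpp
#include <algorithm>
#include <vector>

// One pattern vertex, expressed in barycentric coordinates of the coarse triangle.
struct Bary { float u, v, w; };

// Uniform tessellation pattern of a triangle at a given depth (2^depth cuts per edge).
std::vector<Bary> makePattern(int depth) {
    const int n = 1 << depth;
    std::vector<Bary> pts;
    for (int i = 0; i <= n; ++i)
        for (int j = 0; j <= n - i; ++j) {
            float u = float(i) / n, v = float(j) / n;
            pts.push_back({ u, v, 1.0f - u - v });
        }
    return pts;
}

// One possible per-triangle rule: refine to the deepest tag of its three vertices.
int patternIndex(int tagA, int tagB, int tagC) {
    return std::max(tagA, std::max(tagB, tagC));
}
```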

    Generic Mesh Refinement On GPU

    Many recent publications have shown that a large variety of computations involved in computer graphics can be moved from the CPU to the GPU through clever use of vertex or fragment shaders. Nonetheless, one kind of algorithm remains hard to translate from CPU to GPU: mesh refinement techniques. The main reason is that the vertex shaders available on current graphics hardware do not allow the generation of additional vertices on a mesh stored in graphics memory. In this paper, we propose a general solution for mesh refinement on the GPU. The main idea is to define a generic refinement pattern that is used to virtually create additional inner vertices for a given polygon. These vertices are then displaced according to a procedural displacement map defining the underlying geometry (similarly, the normal vectors may be transformed according to a procedural normal map). For illustration purposes, we use a tessellated triangular pattern, but many other refinement patterns may be employed. To show its flexibility, the technique has been applied to a large variety of refinement techniques: procedural displacement mapping, as well as more complex techniques such as curved PN-triangles or ST-meshes.
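As a rough CPU-side emulation of what the refinement vertex program conceptually does, the sketch below interpolates the coarse triangle's position and normal at one pattern vertex (given by barycentric coordinates) and displaces the result along the interpolated normal with a procedural function. The displacement function and all names are illustrative; in practice this runs in a vertex shader.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Barycentric interpolation of three attributes.
static Vec3 lerp3(const Vec3& a, const Vec3& b, const Vec3& c,
                  float u, float v, float w) {
    return { u * a.x + v * b.x + w * c.x,
             u * a.y + v * b.y + w * c.y,
             u * a.z + v * b.z + w * c.z };
}

// Example procedural displacement: a small sine-based bump field (assumption).
static float displacement(const Vec3& p) {
    return 0.05f * std::sin(10.0f * p.x) * std::cos(10.0f * p.z);
}

// Refined vertex for one pattern sample of a coarse triangle (p0,p1,p2 / n0,n1,n2).
Vec3 refinedVertex(const Vec3& p0, const Vec3& p1, const Vec3& p2,
                   const Vec3& n0, const Vec3& n1, const Vec3& n2,
                   float u, float v, float w) {
    Vec3 p = lerp3(p0, p1, p2, u, v, w);
    Vec3 n = lerp3(n0, n1, n2, u, v, w);   // not renormalized in this sketch
    float d = displacement(p);
    return { p.x + d * n.x, p.y + d * n.y, p.z + d * n.z };
}
```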

    Approximation of Subdivision Surfaces for Interactive Applications

    In this sketch, we propose a visually plausible approximation of subdivision surfaces for interactive applications. The complete idea is discussed in the full paper "QAS: Real-time Quadratic Approximation of Subdivision Surfaces", published in the proceedings of Pacific Graphics 2007 and available online (http://iparla.labri.fr/publications/2007/BS07c/).

    QAS: Real-time Quadratic Approximation of Subdivision Surfaces.

    We introduce QAS, an efficient quadratic approximation of subdivision surfaces that closely matches the appearance of the true subdivision surface while avoiding recursion, providing rendering that is at least an order of magnitude faster. QAS uses enriched polygons, equipped with edge vertices, and replaces them on the fly with low-degree polynomials for interpolating positions and normals. By systematically projecting the vertices of the input coarse mesh onto their limit positions on the subdivision surface, the visual quality of the approximation is good enough to require only a single subdivision step, followed by our patch fitting, which allows real-time performance for outputs of millions of polygons. Additionally, the parametric nature of the approximation offers efficient adaptive sampling for rendering and displacement mapping. Finally, the hexagonal support associated with each coarse triangle is well suited to geometry processors.
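The evaluation step behind such a quadratic approximation can be illustrated as follows: a triangle enriched with one vertex per edge defines a standard six-control-point quadratic Bézier triangle, which is evaluated directly instead of recursing through subdivision. The sketch below shows only that textbook evaluation; the placement of the control points (limit projection, edge fitting) is the paper's contribution and is not reproduced, and all names are assumptions.

```cpp
struct Vec3 { float x, y, z; };

static Vec3 scale(const Vec3& a, float s) { return { a.x * s, a.y * s, a.z * s }; }
static Vec3 add(const Vec3& a, const Vec3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

// Quadratic Bézier triangle: corner control points b200, b020, b002 and
// edge control points b110, b011, b101, evaluated at barycentric (u, v, w).
Vec3 evalQuadraticPatch(const Vec3& b200, const Vec3& b020, const Vec3& b002,
                        const Vec3& b110, const Vec3& b011, const Vec3& b101,
                        float u, float v, float w) {
    Vec3 p = scale(b200, u * u);
    p = add(p, scale(b020, v * v));
    p = add(p, scale(b002, w * w));
    p = add(p, scale(b110, 2.0f * u * v));
    p = add(p, scale(b011, 2.0f * v * w));
    p = add(p, scale(b101, 2.0f * u * w));
    return p;
}
```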

    Interactive Out-Of-Core Texturing Using Point-Sampled Textures

    The visualization of huge 3D objects has become possible on common workstations thanks to highly optimized data structures and out-of-core rendering frameworks. However, editing, and in particular texturing, of such objects is still a challenging task, since the usual methods for optimized rendering are not easily amenable to interactive modification. In this paper, we introduce the idea of point-sampled textures and show how to interactively texture such a huge model at various scales, without any parameterization. An adaptive in-core point-based approximate geometry is first created by an efficient out-of-core point-sampling algorithm. This simplified geometry is then used for interactive, multi-scale point-based texturing. Finally, a feature-preserving kernel is used to convert the point-based model into a global 3D texture that can be applied back onto the initial huge geometry. Our technique thus provides a flexible, point-based tool to generate, edit and apply size-independent textures to a wide range of huge 3D objects.
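For the final conversion step, a minimal sketch of the idea is to scatter the colors painted on the in-core point set into a regular 3D grid (a global 3D texture) that the full-resolution model can later sample. The paper uses a feature-preserving kernel; this sketch substitutes a plain per-voxel accumulation over an assumed unit-cube domain, with hypothetical names, purely for illustration.

```cpp
#include <algorithm>
#include <vector>

struct PaintedPoint { float x, y, z; float r, g, b; };   // edited point sample
struct Voxel { float r = 0, g = 0, b = 0, w = 0; };      // accumulated color + weight

// Accumulate point colors into a res^3 grid, then normalize each voxel.
std::vector<Voxel> bakeTexture3D(const std::vector<PaintedPoint>& pts, int res) {
    std::vector<Voxel> grid(static_cast<std::size_t>(res) * res * res);
    for (const PaintedPoint& p : pts) {
        int i = std::min(res - 1, std::max(0, int(p.x * res)));
        int j = std::min(res - 1, std::max(0, int(p.y * res)));
        int k = std::min(res - 1, std::max(0, int(p.z * res)));
        Voxel& v = grid[(std::size_t(k) * res + j) * res + i];
        v.r += p.r; v.g += p.g; v.b += p.b; v.w += 1.0f;
    }
    for (Voxel& v : grid)
        if (v.w > 0.0f) { v.r /= v.w; v.g /= v.w; v.b /= v.w; }
    return grid;
}
```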

    Rapid Visualization of Large Point-Based Surfaces

    Point-based surfaces can be generated directly by 3D scanners and avoid the construction and storage of an explicit topology for the sampled geometry, which saves time and storage space for very dense and large objects, such as scanned statues and other archaeological artefacts [Duguet 2004]. We propose a fast processing pipeline that turns large point-based surfaces into a real-time, appearance-preserving polygonal rendering. Our goal is to reduce the time needed to go from a point set made of hundreds of millions of samples to a high-resolution visualization that takes advantage of modern graphics hardware, which is tuned for normal mapping of polygons. Our approach starts with an out-of-core generation of a coarse local triangulation of the original model. The resulting coarse mesh is enriched by applying a set of maps which capture the high-frequency features of the original data set. As an example we use the normal component of the samples for these maps, since normal maps efficiently provide accurate local illumination, but our approach is also suitable for other point attributes such as color or position (displacement maps). These maps are also generated out-of-core, streaming over the complete input data. Sampling issues in the maps are addressed with an efficient 2D diffusion algorithm. Our main contribution is to handle such large unorganized point clouds directly through this two-pass algorithm, without the time-consuming meshing or parameterization step required by current state-of-the-art high-resolution visualization methods. One of the main advantages is that most of the fine features present in the original large point clouds are expressed as textures in the large texture memory usually provided by graphics devices, using only a lazy local parameterization. Our technique is a complementary tool to high-quality, but costly, out-of-core visualization systems. Direct applications include interactive preview at high screen resolution of very detailed scanned objects such as statues, inclusion of large point clouds in common polygonal 3D engines, and 3D database browsing.
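The 2D gap-filling step can be illustrated with a basic diffusion pass: texels of the per-patch normal (or color) map that received no sample are iteratively filled by averaging their valid neighbors. The paper's algorithm is more efficient; the iteration scheme, names and layout below are illustrative assumptions.

```cpp
#include <vector>

// One map texel: a 3-component value (e.g. a normal) and a validity flag.
struct Texel { float v[3]; bool valid; };

// Jacobi-style diffusion: each pass fills empty texels from valid 4-neighbors.
void diffuseHoles(std::vector<Texel>& img, int w, int h, int iterations) {
    for (int it = 0; it < iterations; ++it) {
        std::vector<Texel> next = img;
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                if (img[y * w + x].valid) continue;        // keep sampled texels
                float acc[3] = { 0, 0, 0 }; int n = 0;
                const int dx[4] = { 1, -1, 0, 0 }, dy[4] = { 0, 0, 1, -1 };
                for (int k = 0; k < 4; ++k) {
                    int nx = x + dx[k], ny = y + dy[k];
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    const Texel& s = img[ny * w + nx];
                    if (!s.valid) continue;
                    for (int c = 0; c < 3; ++c) acc[c] += s.v[c];
                    ++n;
                }
                if (n > 0) {
                    Texel& t = next[y * w + x];
                    for (int c = 0; c < 3; ++c) t.v[c] = acc[c] / n;
                    t.valid = true;                        // hole filled this pass
                }
            }
        img = next;
    }
}
```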

    SIMOD: Making Freeform Deformation Size-Insensitive

    Freeform deformation (FFD) techniques are powerful and flexible tools for interactive 3D shape editing. However, while interactivity is the key constraint for the usability of such tools, it cannot be maintained when the complexity of either the 3D model or the applied deformation exceeds a workstation-dependent threshold. In this system paper, we solve this scalability problem by introducing a streaming system based on a sampling-reconstruction approach. First, an efficient out-of-core adaptive simplification algorithm is performed as a pre-processing step to quickly generate a simplified version of the model. The resulting model can then be submitted to arbitrary FFD tools, as its reduced size ensures interactive response. Second, a post-processing step performs a feature-preserving reconstruction, onto the original model, of the deformation undergone by the simplified version. Both bracketing steps share a streaming, point-based basis, making them fully scalable and compatible with point clouds, non-manifold polygon soups and meshes. Our system also offers a generic out-of-core multi-scale layer to arbitrary FFD tools, since the two bracketing steps remain available for partial upsampling during the interactive session. As a result, arbitrarily large 3D models can be interactively edited with most FFD tools, opening the use and combination of advanced deformation metaphors to models ranging from millions to billions of samples. Our system also makes it possible to work on models that fit in memory but exceed the capabilities of a given FFD tool.
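The reconstruction half of this sampling-reconstruction principle can be sketched as follows: the displacement undergone by each simplified sample (deformed minus rest position) is transferred to every original sample as a distance-weighted average of nearby simplified displacements. The actual system does this out-of-core, streaming and feature-preserving; the in-core, linear-scan C++ below, with assumed names, only illustrates the principle.

```cpp
#include <cmath>
#include <vector>

struct P3 { float x, y, z; };

// Transfer the deformation of the simplified model (rest -> deformed) onto the
// original samples by Gaussian-weighted averaging within `radius`.
std::vector<P3> reconstruct(const std::vector<P3>& originals,
                            const std::vector<P3>& simpleRest,
                            const std::vector<P3>& simpleDeformed,
                            float radius) {
    std::vector<P3> out = originals;
    for (std::size_t i = 0; i < originals.size(); ++i) {
        float dx = 0, dy = 0, dz = 0, wsum = 0;
        for (std::size_t j = 0; j < simpleRest.size(); ++j) {
            float ux = originals[i].x - simpleRest[j].x;
            float uy = originals[i].y - simpleRest[j].y;
            float uz = originals[i].z - simpleRest[j].z;
            float d2 = ux * ux + uy * uy + uz * uz;
            if (d2 > radius * radius) continue;          // too far to contribute
            float w = std::exp(-d2 / (radius * radius));
            dx += w * (simpleDeformed[j].x - simpleRest[j].x);
            dy += w * (simpleDeformed[j].y - simpleRest[j].y);
            dz += w * (simpleDeformed[j].z - simpleRest[j].z);
            wsum += w;
        }
        if (wsum > 0) {
            out[i].x += dx / wsum; out[i].y += dy / wsum; out[i].z += dz / wsum;
        }
    }
    return out;
}
```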